Humans can use physical interaction to teach robot arms. This physical interaction depends on the task, the user, and what the robot has learned so far. State-of-the-art approaches focus on learning from a single modality, or combine multiple interaction types by assuming the robot has prior information about the human's intended task. By contrast, in this paper we introduce an algorithmic formalism that learns from demonstrations, corrections, and preferences. Our approach makes no assumptions about the task the human wants to teach the robot; instead, we learn a reward model from scratch by comparing the human's input to nearby alternatives. We first derive a loss function that trains an ensemble of reward models to match the human's demonstrations, corrections, and preferences. The type and order of feedback are up to the human teacher: we enable the robot to collect this feedback either passively or actively. We then apply constrained optimization to convert the learned reward into a desired robot trajectory. Through simulations and a user study, we demonstrate that our proposed approach learns manipulation tasks from physical human interaction more accurately than existing baselines, particularly when the robot faces new or unexpected objectives. Videos of our user study are available at: https://youtu.be/fsujstyveku
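To make the learning-from-comparisons idea concrete, the sketch below trains a small ensemble of reward networks with a softmax (Bradley-Terry style) loss so that the human's input out-scores nearby alternative trajectories. This is only an illustrative approximation of the formalism described above; the network sizes, feature dimensions, and data in the snippet are placeholders, not the authors' implementation.

```python
# Hedged sketch, not the paper's exact formulation: each human input
# (demonstration, correction, or preferred option) should score higher than
# nearby alternative trajectories under the learned reward.
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    def __init__(self, feat_dim=10, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):                       # x: (..., feat_dim) trajectory features
        return self.net(x).squeeze(-1)          # scalar reward per trajectory

def human_input_loss(net, human_traj, alt_trajs):
    """human_traj: (B, D); alt_trajs: (B, K, D) nearby alternatives."""
    r_human = net(human_traj)                               # (B,)
    r_alt = net(alt_trajs)                                   # (B, K)
    logits = torch.cat([r_human.unsqueeze(1), r_alt], dim=1) # human input is index 0
    targets = torch.zeros(len(logits), dtype=torch.long)
    return nn.functional.cross_entropy(logits, targets)

# Train an ensemble so reward uncertainty can support active feedback collection.
ensemble = [RewardNet() for _ in range(5)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in ensemble]
human = torch.randn(32, 10)       # placeholder features of human-chosen trajectories
alts = torch.randn(32, 8, 10)     # placeholder features of nearby alternatives
for net, opt in zip(ensemble, optimizers):
    loss = human_input_loss(net, human, alts)
    opt.zero_grad()
    loss.backward()
    opt.step()
```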
Spurious correlations in training data often lead to robustness issues, since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background and would therefore do poorly on a cow on a sandy background. A standard dataset for benchmarking methods that mitigate this problem is Waterbirds. The best current method (Group Distributionally Robust Optimization, GroupDRO) achieves 89\% worst-group accuracy, while standard training from scratch on raw images only reaches 72\%. GroupDRO requires training a model end-to-end with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any subgroup information in the training set, simply by taking embeddings from a large pre-trained vision model and training a linear classifier on top of them. Through experiments on a wide range of pre-trained models and pre-training datasets, we show that both the capacity of the pre-trained model and the size of the pre-training dataset matter. Our experiments reveal that high-capacity vision transformers perform better than high-capacity convolutional neural networks, and that larger pre-training datasets lead to better worst-group accuracy on the spurious-correlation dataset.
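A minimal sketch of the recipe described above, assuming embeddings from a frozen pre-trained vision model have already been extracted: fit a linear probe, then report worst-group accuracy, using group labels only for evaluation. All data below are random stand-ins.

```python
# Linear probe on frozen embeddings + worst-group accuracy evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(2000, 768)), rng.integers(0, 2, 2000)  # stand-ins for embeddings/labels
X_test, y_test = rng.normal(size=(500, 768)), rng.integers(0, 2, 500)
group_test = rng.integers(0, 4, 500)   # (class, background) groups, used only for evaluation

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # linear classifier on frozen features
pred = clf.predict(X_test)

worst = min((pred[group_test == g] == y_test[group_test == g]).mean()
            for g in np.unique(group_test))
print(f"worst-group accuracy: {worst:.3f}")
```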
The primary obstacle to developing technologies for low-resource languages is the lack of representative, usable data. In this paper, we report the deployment of technology-driven data collection methods for creating a corpus of more than 60,000 translations from Hindi to Gondi, a low-resource vulnerable language spoken by around 2.3 million tribal people in south and central India. During this process, we help expand information access in Gondi across two dimensions: (a) the creation of linguistic resources that can be used by the community, such as a dictionary, children's stories, Gondi translations from multiple sources, and an Interactive Voice Response (IVR) based mass awareness platform; and (b) enabling its use in the digital domain by developing a Hindi-Gondi machine translation model, compressed by nearly 4 times to enable its deployment on low-resource edge devices and in areas with little to no internet connectivity. We also present preliminary evaluations of using the developed machine translation model to assist volunteers who are collecting more data for the target language. Through these interventions, we not only created a refined and evaluated corpus of 26,240 Hindi-Gondi translations that was used to build the translation model, but also engaged nearly 850 community members who can help bring Gondi onto the internet.
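The abstract does not state which compression technique was used. As one hedged illustration, post-training dynamic quantization of a Transformer's linear layers (fp32 to int8) gives roughly the quoted 4x size reduction; the model name below is a placeholder, not the project's actual Hindi-Gondi checkpoint.

```python
# Illustrative (not the paper's pipeline): shrink a seq2seq MT model for edge
# deployment via post-training dynamic quantization of its Linear layers.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-hi-en")  # placeholder MT model
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)  # int8 weights for Linear layers only

torch.save(quantized.state_dict(), "mt_quantized.pt")  # smaller artifact for edge devices
```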
The use of emojis adds a visual modality to often-private textual communication. Predicting emojis, however, is a challenge for machine learning, as emoji use tends to cluster into frequently used and rarely used emojis. Much of the machine learning research on emoji use has focused on high-resource languages and has framed emoji prediction around traditional server-side machine learning approaches. Such approaches can introduce privacy concerns for private communication, as they require all data to be transmitted to central storage. In this paper, we address the dual concerns of the emphasis on high-resource languages for emoji prediction and the risk to the privacy of people's data. We introduce a new dataset of $118$k tweets (augmented from $25$k unique tweets) for emoji prediction in Hindi, and propose a modification to the federated learning algorithm, CausalFedGSD, which aims to strike a balance between model performance and user privacy. We show that our approach achieves scores comparable to more complex centralised models while reducing the amount of data required to optimise the models and minimising risks to user privacy.
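The specifics of CausalFedGSD are not reproduced here; the sketch below only shows the basic federated-averaging skeleton that such an approach builds on, in which raw tweets stay on each device and only model weights are aggregated centrally. The model, data, and hyperparameters are placeholders.

```python
# Generic federated-averaging skeleton (not CausalFedGSD itself).
import copy
import torch
import torch.nn as nn

def local_update(global_model, local_data, epochs=1, lr=0.1):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in local_data:
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict(), sum(len(y) for _, y in local_data)

def fed_avg(global_model, client_updates):
    """Average client weights, weighted by number of local examples."""
    total = sum(n for _, n in client_updates)
    avg = {k: sum(sd[k].float() * n for sd, n in client_updates) / total
           for k in client_updates[0][0]}
    global_model.load_state_dict(avg)
    return global_model

global_model = nn.Linear(300, 64)   # placeholder emoji classifier over 64 emojis
clients = [[(torch.randn(8, 300), torch.randint(0, 64, (8,)))] for _ in range(3)]
updates = [local_update(global_model, data) for data in clients]
global_model = fed_avg(global_model, updates)
```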
Generalization is an important attribute of machine learning models, particularly for those that are to be deployed in a medical context, where unreliable predictions can have real-world consequences. While the failure of models to generalize across datasets is typically attributed to a mismatch in the data distributions, performance gaps are often a consequence of biases in the 'ground-truth' label annotations. This is particularly important in medical image segmentation of pathological structures (e.g. lesions), where the annotation process is much more subjective and is affected by a number of underlying factors, including the annotation protocol, rater education/experience, and clinical aims, among others. In this paper, we show that modeling annotation biases, rather than ignoring them, offers a promising way of accounting for differences in annotation style across datasets. To this end, we propose a generalized conditioning framework to (1) learn and account for different annotation styles across multiple datasets using a single model, (2) identify similar annotation styles across different datasets in order to permit their effective aggregation, and (3) fine-tune a fully trained model to a new annotation style with just a few samples. We then present an image-conditioning approach to model annotation styles that correlate with specific image features, potentially enabling detection biases to be identified more easily.
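As a hedged illustration of what such conditioning can look like (not the paper's actual architecture), the snippet below learns an embedding per source dataset and uses it to modulate a segmentation network's feature maps, FiLM-style, so one model can reproduce different annotation styles.

```python
# Toy segmentation model conditioned on an annotation-style / dataset ID.
import torch
import torch.nn as nn

class StyleConditionedSeg(nn.Module):
    def __init__(self, n_styles, ch=16):
        super().__init__()
        self.backbone = nn.Conv2d(1, ch, 3, padding=1)
        self.style_emb = nn.Embedding(n_styles, 2 * ch)   # per-dataset annotation-style code
        self.head = nn.Conv2d(ch, 1, 1)                   # binary lesion mask logits

    def forward(self, image, style_id):
        feats = torch.relu(self.backbone(image))
        gamma, beta = self.style_emb(style_id).chunk(2, dim=-1)
        feats = feats * gamma[..., None, None] + beta[..., None, None]  # FiLM modulation
        return self.head(feats)

model = StyleConditionedSeg(n_styles=3)
logits = model(torch.randn(2, 1, 64, 64), torch.tensor([0, 2]))  # same images, two annotation styles
```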
Flexible octopus arms exhibit an exceptional ability to coordinate a large number of degrees of freedom and perform complex manipulation tasks. As a result, these systems continue to attract the attention of biologists and roboticists. In this paper, we develop a three-dimensional model of a soft octopus arm equipped with biomechanically realistic muscle actuation. The internal forces and couples exerted by all major muscle groups are accounted for. An energy-shaping control method is described to coordinate muscle activity so as to grasp and reach in 3D space. The main contributions of this paper are: (i) modeling of the major muscle groups to elicit three-dimensional movements; (ii) a mathematical formulation of muscle activations based on a stored energy function; and (iii) a computationally efficient procedure for designing task-specific equilibrium configurations, obtained by solving an optimization problem in the special Euclidean group SE(3). Muscle controls are then iteratively computed based on the co-state variables arising from the solution of the optimization problem. The approach is numerically demonstrated in the physically accurate software environment Elastica, and results of numerical experiments mimicking observed octopus behavior are reported.
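As a rough, heavily simplified stand-in for the equilibrium-design step (planar rather than SE(3), and not the authors' formulation), the toy below finds an arm shape by minimizing a stored-energy term plus a task term that pulls the tip toward a target.

```python
# Toy planar analogue: equilibrium shape as a minimizer of stored energy + task cost.
import numpy as np
from scipy.optimize import minimize

n, seg_len, target = 20, 0.05, np.array([0.6, 0.5])

def energy(thetas):
    bending = np.sum(np.diff(thetas) ** 2)                          # elastic stored energy
    xy = np.cumsum(seg_len * np.stack([np.cos(thetas), np.sin(thetas)], 1), 0)
    reach = np.sum((xy[-1] - target) ** 2)                          # task term: tip near target
    return bending + 50.0 * reach

theta_eq = minimize(energy, np.zeros(n), method="BFGS").x           # equilibrium segment angles
```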
Machine learning (ML) algorithms are playing a growing role in helping scientific communities across different disciplines and institutions tackle large and diverse data problems. However, many available ML tools are demanding to program and computationally costly. The MLExchange project aims to build a collaborative platform equipped with enabling tools that let scientists and facility users without a deep ML background apply ML and computational resources to scientific discovery. At a high level, we target a complete user experience in which managing and exchanging ML algorithms, workflows, and data is easily accessible through web applications. So far, we have built four major components, namely a central job manager, a centralized content registry, a user portal, and a search engine, and have successfully deployed them on a test server. Because each component is an independent container, the whole platform or any individual service can easily be deployed on servers of different scales, from a laptop (typically a single user) to high-performance clusters (HPC) accessed simultaneously by many users. MLExchange therefore supports flexible usage scenarios: users can access services and resources from a remote server, or run the entire platform or its individual services within their local network.
Mobile notifications have become a major communication channel for social networking services to keep users informed and engaged. As more and more mobile applications push notifications to users, they constantly face decisions about what to send, when, and how. A lack of research and methodology often leads to heuristic decision making. Many notifications arrive at inappropriate moments or introduce too many interruptions, failing to provide value to users and prompting user complaints. In this paper, we explore the unique features of the interaction between mobile notifications and user engagement. We propose a state-transition framework to quantitatively evaluate the effectiveness of notifications. Within this framework, we develop a survival model for badging notifications that assumes a log-linear structure and a Weibull distribution. Our results show that this model offers greater flexibility for applications and superior prediction accuracy compared with a logistic regression model. In particular, we present an online use case on notification delivery-time optimization, showing how we can make better decisions, drive more user engagement, and deliver more value to users.
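The sketch below fits a Weibull survival model with log-linear covariate effects using an accelerated-failure-time parameterization, which may differ in detail from the paper's model; the data and covariate names are invented placeholders for time-until-visit after a badging notification.

```python
# Weibull survival model with log-linear covariate effects (AFT form).
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hours_to_visit": rng.weibull(1.5, 1000) * 24,   # placeholder response times
    "visited": rng.integers(0, 2, 1000),              # 1 = user returned, 0 = censored
    "badge_count": rng.integers(1, 10, 1000),
    "hour_of_day": rng.integers(0, 24, 1000),
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="hours_to_visit", event_col="visited")
print(aft.summary)   # log-linear coefficients per covariate
```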
Language models demonstrate both quantitative improvements and new qualitative capabilities as they are scaled up. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, child development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though sparsity provides benefits; tasks that improve gradually and predictably tend to involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved through prompting.
Coordinate-based neural networks parameterizing implicit surfaces have emerged as effective representations of geometry. They act as parametric level sets, with the zero-level set defining the surface of interest. We present a framework that allows applying deformation operations defined for triangle meshes to such implicit surfaces. Several of these operations can be viewed as energy-minimization problems that induce an instantaneous flow field on the explicit surface. Our method uses this flow field to deform parametric implicit surfaces by extending the classical theory of level sets. We also derive a consolidated view of existing methods for differentiable surface extraction and rendering by formalizing their connections to level-set theory. We show that these methods deviate from the theory and that our approach yields improvements for applications such as surface smoothing, mean-curvature flow, inverse rendering, and user-defined editing.
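For background, the snippet below shows one explicit Euler step of the classical grid-based level-set evolution that the paper extends to parametric (network-based) implicit surfaces: it advances a signed distance field under a constant normal speed, and is not the paper's method.

```python
# One explicit Euler step of classical level-set evolution:
# d(phi)/dt = -V * |grad(phi)|, so the zero level set moves with normal speed V.
import numpy as np

n, dt, V = 128, 0.2, 1.0
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5                    # signed distance to a circle (zero level set)

gx, gy = np.gradient(phi, x, x)                     # spatial gradient of the level-set function
phi_next = phi - dt * V * np.sqrt(gx**2 + gy**2)    # circle expands outward with speed V
```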